
    Musical Expertise and Statistical Learning of Musical and Linguistic Structures

    Adults and infants can use the statistical properties of syllable sequences to extract words from continuous speech. Here we review a series of electrophysiological studies investigating (1) speech segmentation resulting from exposure to spoken and sung sequences, (2) the extraction of linguistic versus musical information from a sung sequence, and (3) differences between musicians and non-musicians in both linguistic and musical dimensions. The results show that segmentation is better after exposure to sung than to spoken material and, moreover, that the linguistic structure is learned better than the musical structure when sung material is used. In addition, musical expertise facilitates the learning of both linguistic and musical structures. Finally, an electrophysiological approach, which directly measures brain activity, appears to be more sensitive than a behavioral one.
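    The statistical-learning mechanism these studies build on is usually modeled with transitional probabilities (TPs) between adjacent syllables: TPs are high within words and drop at word boundaries. The minimal sketch below illustrates the idea; the syllable stream, the three "words", and the 0.75 threshold are hypothetical illustrations, not stimuli from the studies.

    ```python
    from collections import Counter

    def transitional_probabilities(syllables):
        """P(next | current) for every adjacent syllable pair in the stream."""
        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])
        return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

    def segment(syllables, tps, threshold=0.75):
        """Posit a word boundary wherever the TP dips below the threshold."""
        words, current = [], [syllables[0]]
        for a, b in zip(syllables, syllables[1:]):
            if tps[(a, b)] < threshold:
                words.append("".join(current))
                current = []
            current.append(b)
        words.append("".join(current))
        return words

    # Hypothetical artificial language: three trisyllabic "words" concatenated
    # in a varied order, so within-word TPs are 1.0 and between-word TPs are lower.
    w = {"A": ["tu", "pi", "ro"], "B": ["go", "la", "bu"], "C": ["bi", "da", "ku"]}
    order = "ABCBACAB"
    stream = [s for label in order for s in w[label]]

    tps = transitional_probabilities(stream)
    print(segment(stream, tps))
    # → ['tupiro', 'golabu', 'bidaku', 'golabu', 'tupiro', 'bidaku', 'tupiro', 'golabu']
    ```

    Listeners in these paradigms are assumed to track something like these statistics implicitly during passive exposure; the sketch only makes the computation explicit.
    
    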

    Faster Sound Stream Segmentation In Musicians Than In Nonmusicians

    The musician's brain is considered a good model of brain plasticity, as musical training is known to modify auditory perception and the related cortical organization. Here, we show that music-related modifications can also extend beyond motor and auditory processing and generalize (transfer) to speech processing. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and non-linguistic stimuli based only on the probabilities of occurrence between adjacent syllables, tones, or timbres. The paradigm classically used in these studies consists of a passive exposure phase followed by a testing phase. Using both behavioural and electrophysiological measures, we recently showed that adult musicians and musically trained children outperform nonmusicians in the test following brief exposure to an artificial sung language. However, the behavioural test does not capture the learning process per se, only its outcome. In the present study, we analyze electrophysiological learning curves, that is, the ongoing brain dynamics recorded as learning takes place. While musicians show an inverted U-shaped learning curve, nonmusicians show a linear one. Analyses of event-related potentials (ERPs) allow for a greater understanding of how and when musical training can improve speech segmentation. These results provide evidence of enhanced neural sensitivity to statistical regularities in musicians and support the hypothesis of a positive transfer of training from music to sound stream segmentation in general.

    Language and Speech Rhythmic Abilities Correlate with L2 Prosody Imitation Abilities in Typologically Different Languages

    While many studies have demonstrated the relationship between musical rhythm and speech prosody, this relationship has rarely been addressed in the context of second language (L2) acquisition. Here, we investigated whether musical rhythmic skills and the production of L2 speech prosody are predictive of one another. We tested both the musical and the linguistic rhythmic competences of 23 native French speakers of L2 English. Participants completed music and language perception and production tests. In the prosody production test, participants heard and had to reproduce sentences containing trisyllabic words with a prominence on either the first or the second syllable. Participants were less accurate in reproducing penultimate accent placement. Moreover, accuracy in reproducing phonologically disfavored stress patterns was best predicted by rhythm production abilities. Our results show, for the first time, that better reproduction of musical rhythmic sequences is predictive of more successful realization of unfamiliar L2 prosody, specifically in terms of stress-accent placement.

    Implicit Learning of Linguistic and Musical Structures (A Multi-Methodological Approach)

    The aims of this thesis were twofold. First, we compared behavioral and electrophysiological measures related to the implicit learning of linguistic and musical structures contained in an artificial sung language. While behavioral measures suggested that only the linguistic structure was learned, electrophysiological data revealed similar N400 effects in both the linguistic and the musical dimensions, suggesting that participants also learned the musical structure. The second goal was to evaluate to what extent musical expertise affects speech segmentation. To this end, we compared a group of adult musicians to a group of nonmusicians. While behavioral data showed that musicians performed only marginally better than nonmusicians in both dimensions, electrophysiological data revealed, via early (N1/P2) and late (N400) differences, better speech segmentation in musicians than in nonmusicians. Moreover, event-related potential and time-frequency analyses of the data recorded during the learning phases revealed a faster and more efficient learning process in musicians. However, a causal link between musical training and the observed effects can only be established with a longitudinal study. We therefore conducted such a study with 8-year-old children who followed either music or painting lessons over a period of two years. Behavioral and electrophysiological results revealed a large benefit of musical training compared to painting training, demonstrating the importance of music in children's education.

    Orthographic Contamination of Broca’s Area

    Strong evidence has accumulated over recent years suggesting that orthography plays a role in spoken language processing. It is still unclear, however, whether the influence of orthography on spoken language results from a co-activation of posterior brain areas dedicated to low-level orthographic processing, or from an orthographic restructuring of phonological representations located in the anterior perisylvian speech network itself. To test these hypotheses, we ran an fMRI study that tapped orthographic processing in the visual and auditory modalities. As a marker for orthographic processing, we used the orthographic decision task in the visual modality and the orthographic consistency effect in the auditory modality. Results showed no specific orthographic activation in either the visual or the auditory modality in the left posterior occipito-temporal brain areas thought to host the visual word form system. In contrast, specific orthographic activation was found for both the visual and auditory modalities at anterior sites belonging to the perisylvian region: the left dorsal–anterior insula and the left inferior frontal gyrus. These results favor the restructuring hypothesis, according to which learning to read acts like a "virus" that permanently contaminates the spoken language system.

    Metrical Presentation Boosts Implicit Learning Of Artificial Grammar

    The present study investigated whether a temporal hierarchical structure favors implicit learning. An artificial pitch grammar implemented with a set of tones was presented in two different temporal contexts, either with a strongly metrical structure or with an isochronous structure. According to the Dynamic Attending Theory, external temporal regularities can entrain internal oscillators that guide attention over time, creating temporal expectations that influence the perception of future events. Based on this framework, it was hypothesized that the metrical structure would benefit artificial grammar learning compared to an isochronous presentation. Our study combined behavioral and event-related potential measurements. Behavioral results demonstrated similar learning in both participant groups. By contrast, analyses of event-related potentials showed a larger P300 component and an earlier N2 component for the strongly metrical group during the exposure phase and the test phase, respectively. These findings suggest that the temporal expectations in the strongly metrical condition helped listeners to better process the pitch dimension, leading to improved learning of the artificial grammar.

    Temporal Semiotic Units as minimal meaningful units in music? An electrophysiological approach.

    The aim of this study was to determine whether conceptual priming occurs between successively presented short musical pieces called Temporal Semiotic Units (TSUs). Behavioral and ERP data were recorded while participants, experts and nonexperts in TSUs, listened to pairs of TSUs and were asked to determine whether the target TSU evoked the same concept as the prime TSU or a different one. Target TSUs were either congruous (i.e., they developed the same musical concept as the prime TSUs) or incongruous (i.e., they started as congruous TSUs but shifted midstream into a different concept). Results showed that, whereas P3a components were elicited in both groups by the shift into incongruous TSUs, reflecting an automatic shift of attention when the changes occurred, P3b components were elicited in experts and N400-like components were found in nonexperts. The functional significance of these results is discussed with regard to previous results with environmental sounds.

    Involvement of the larynx motor area in singing-voice perception: a TMS study

    Recent evidence suggests that the motor system plays a role in the discrimination of speech and emotional vocalizations. In the present study, we investigated the involvement of the larynx motor representation in singing perception. Twenty-one non-musicians listened to short tones sung by a human voice or played by a machine and performed a categorization task. Thereafter, continuous theta-burst transcranial magnetic stimulation was applied over the right larynx premotor area or over the vertex, and the test was administered again. Overall, reaction times (RTs) were shorter after stimulation over both sites. Nonetheless, and most importantly, RTs became longer for sung than for "machine" sounds after stimulation over the larynx area. This effect suggests that the right premotor region is functionally involved in singing perception and that sound humanness modulates motor resonance.

    Speech Perception and MRI: Design, Evaluation, and Validation of a System Delivering High-Quality Sound Stimulation During MRI Sequences

    This study describes the design and assessment of MRI-compatible sound production hardware, developed to enable auditory studies with magnetic resonance imaging (MRI) techniques. A major drawback of MR imagers is the acoustic noise generated during data acquisition, caused by fast gradient switching interacting with the main magnetic field. Several solutions were explored to reduce this noise and to deliver audio stimuli of reasonable quality at the subjects' eardrums. The sound production system was first tested with instrumental methods (sound level measurements, spectral analysis). Finally, perceptual tests consisting of intelligibility, semantic decision, and prosodic judgement tasks were carried out to validate the installation for psycholinguistic and psychoacoustic studies.